
fix(nzb): handle gzipped NZBs end-to-end; centralize in internal/nzbfile #533

Merged
javi11 merged 4 commits into main from fix/nzb-gzip-handling
Apr 24, 2026

fix(nzb): handle gzipped NZBs end-to-end; centralize in internal/nzbfile#533
javi11 merged 4 commits intomainfrom
fix/nzb-gzip-handling

Conversation

@javi11 (Owner) commented Apr 24, 2026

Summary

Fixes three user-visible failures introduced with the .nzb.gz persistence feature, and centralizes all gzip NZB I/O in a new internal/nzbfile package so new call sites can't reintroduce the same bugs.

Bugs fixed

  • Download 404 for gzipped NZBs. GET /api/queue/{id}/download returned NOT_FOUND / "The NZB file no longer exists on disk" whenever the DB-stored NzbPath extension drifted from what was actually on disk (legacy .nzb rows against gzipped files, or vice versa). The handler now calls nzbfile.ResolveOnDisk, which falls back to the .gz-toggled variant before 404-ing.
  • SABnzbd fallback XML error U+001F. SABnzbdClient.SendNZBFile streamed raw gzip bytes into the multipart upload; external SABnzbd rejected them (XML syntax error on line 2: illegal character code U+001F). Now opens via nzbfile.Open so the upload is plain NZB XML and the uploaded filename strips .gz.
  • Collision suffix broke .nzb.gz compound extension. ensurePersistentNzb used filepath.Ext + TrimSuffix, which treated .nzb.gz as .gz, producing movie.nzb_1.gz. The new nextCollisionPath helper uses nzbtrim.TrimNzbExtension and yields movie_1.nzb.gz, keeping the file a valid NZB for downstream scanners and the download handler.

Refactor

New internal/nzbfile package owns gzip NZB I/O: Open (transparent decompress), Compress, IsGzipped, PlainFilename, GzExtension, and ResolveOnDisk. All existing callers now go through it:

  • internal/api/queue_handlers.go — download handler (streams via io.Copy(c.Response().BodyWriter(), rc) rather than buffering with io.ReadAll).
  • internal/sabnzbd/client.go — external SABnzbd fallback.
  • internal/importer/service.go — persist, size calculation, metadata regen, collision handling.
  • internal/importer/processor.go, internal/importer/migration.go — use nzbfile.Open/nzbfile.Compress.

The old internal/importer/nzb_file.go helpers and their tests were deleted; the logic moved into internal/nzbfile. RegenerateMetadata was simplified to use nzbtrim.HasNzbExtension + nzbtrim.TrimNzbExtension instead of an inline switch. The unused PlainExtension constant and a dead _ = dir line in the test were also removed.

Test plan

  • go build ./... passes
  • go vet ./... passes
  • go test -race ./internal/nzbfile/ ./internal/importer/ ./internal/api/ passes
  • New unit tests: internal/nzbfile/nzbfile_test.go (Open/Compress roundtrip, ResolveOnDisk drift in both directions, IsGzipped/PlainFilename case preservation) and internal/importer/collision_test.go (table-driven nextCollisionPath including compound .nzb.gz, uppercase, taken-suffix skipping, plain .nzb)
  • Manual: upload two identically-named NZBs → second persists as name_1.nzb.gz
  • Manual: GET /api/queue/{id}/download for a gzipped item → 200, body is plain NZB XML, Content-Disposition filename ends in .nzb
  • Manual: trigger SABnzbd fallback for a gzipped queue item → external SABnzbd accepts the upload (no U+001F error)

javi11 added 4 commits April 24, 2026 13:32
Three related bugs from the .nzb.gz persistence change were producing
user-visible failures:

- Download handler returned 404 for queue items whose stored NzbPath did
  not match the on-disk extension variant.
- External SABnzbd fallback uploaded raw gzip bytes, which SABnzbd
  rejected with "XML syntax error on line 2: illegal character code
  U+001F".
- Collision suffixing treated ".nzb.gz" as a single-extension ".gz",
  producing names like "movie.nzb_1.gz" that broke downstream scanners
  and the download handler's suffix check.

Consolidate gzip NZB I/O in a new internal/nzbfile package (Open,
Compress, IsGzipped, PlainFilename, GzExtension, ResolveOnDisk) and
route all callers through it: API download handler, importer service
(persist + size calc + metadata regen), processor, migration, and the
SABnzbd client. Delete the importer-local nzb_file.go helpers and move
their tests into the new package. Download now streams via
io.Copy(c.Response().BodyWriter(), rc) instead of buffering the whole
decompressed payload. Collision logic moves into a testable
nextCollisionPath helper using nzbtrim.TrimNzbExtension.
Replace filesystem collision-suffix scanning with a deterministic
"<base>_<queue_id>.nzb.gz" scheme so duplicate names are impossible by
construction. AddToQueue now inserts the DB row first (assigning
item.ID), then moves/compresses the file; on persistence failure the
row is rolled back via RemoveFromQueue so the queue can't leak orphan
entries pointing at temp paths.

Drops the nextCollisionPath helper and its table-driven test — the
problem it solved no longer exists.
…gle to Import

User-facing changes:
- Queue download button now surfaces backend errors via a toast instead
  of swallowing them silently. For completed items a 404 shows a softer
  "NZB file already removed" info toast, since the file was almost
  certainly cleaned up post-import.
- The "delete NZB after import" option moves from the Metadata section
  (labelled "Aggressive Cleanup") to the Import section where users
  actually look for import-lifecycle options. Renamed to a clearer
  "Delete NZB After Import" with an explicit note that downloads will
  no longer work after this runs.

Config migration:
- DeleteCompletedNzb moves from MetadataConfig to ImportConfig in Go.
- Config.ShouldDeleteCompletedNzb() prefers the new import.* key but
  falls back to the legacy metadata.* key so existing config files keep
  working without manual edits.
- Download-cleanup no longer clears nzb_path in the DB, so the queue
  list keeps displaying the original filename.
…sion

- Delete the on-disk NZB whenever a queue row is removed (single delete,
  bulk delete, clear completed/failed/pending, system cleanup, system
  stats reset). Repository clear functions now return the deleted paths
  alongside the count; BulkOperationResult gained a DeletedPaths slice.
  A new Server.removeQueueNzbFiles helper handles the actual os.Remove
  with missing-file tolerance and warn-on-other-errors.
- Switch persisted .nzb.gz compression from gzip.BestSpeed to
  gzip.BestCompression. Imports run once; smaller on-disk files are
  worth the one-time CPU cost. Existing files remain readable.
@javi11 javi11 merged commit 3f1c6f9 into main Apr 24, 2026
2 checks passed
@javi11 javi11 deleted the fix/nzb-gzip-handling branch April 24, 2026 13:14